The White House proposes new AI policy framework that supersedes state laws

Engadget

The framework includes proposals for child privacy protections, fewer restrictions on data center buildout and vague ideas about IP licensing. The White House has announced a new AI policy framework that calls for Congress to craft federal regulation that overrules state AI laws. The Trump administration has made multiple attempts to overrule more restrictive state-level AI regulation, most notably during the passage of the "One Big Beautiful Bill," but has so far failed. The framework covers a variety of topics, from child privacy to the use of AI in the workforce. "Importantly, this framework can succeed only if it is applied uniformly across the United States," the White House writes.


White House unveils its first national AI framework, pushes Congress to act 'this year'

FOX News

The White House unveiled its first federal AI policy framework Friday, with officials Michael Kratsios and David Sacks urging Congress to pass a national standard this year.


'Dune' tried to warn us against AI

Popular Science

'Thou shalt not make a machine in the likeness of a human mind.' AI is illegal in the 'Dune' universe, but not for the reasons you may think. Even the biggest fans of 'Dune' know that Frank Herbert's classic sci-fi epic quickly veers into the fantastical. Giant subterranean sandworms measuring 1,500 feet long; a narcotic that fuels interstellar travel and bends a user's perception of space-time; a mystical cabal of eugenicist witches; the list goes on.


'100 Video Calls Per Day': Models Are Applying to Be the Face of AI Scams

WIRED

Dozens of Telegram channels reviewed by WIRED include job listings for "AI face models." The (mostly) women who land these gigs are likely being used to dupe victims out of their money. "I can speak fluent English, I can speak good Chinese, I also speak Russian and Turkish," the glamorous, 24-year-old Uzbekistani woman explains in a selfie-style video made for recruiters. Angel had arrived in the Cambodian city of Sihanoukville that day, she said, and was ready to start work immediately. Those impressive language skills, however, have likely been put to use as part of elaborate "pig-butchering" scams targeting Americans.



The Machine Ethics podcast: moral agents with Jen Semler

AIHub

Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology's impact on society. This month, Ben met in-person with Jen Semler. Jen Semler is a Postdoctoral Fellow at Cornell Tech's Digital Life Initiative. Her research focuses on the intersection of ethics, technology, and moral agency. She holds a DPhil (PhD) in philosophy from the University of Oxford.


The greatest risk of AI in higher education isn't cheating – it's the erosion of learning itself

AIHub

Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Should universities ban the tech? But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom. Universities are adopting AI across many areas of institutional life.


The Good Robot podcast: the role of designers in AI ethics with Tomasz Hollanek

AIHub

Hosted by Eleanor Drage and Kerry McInerney, The Good Robot is a podcast which explores the many complex intersections between gender, feminism and technology. In this episode, we talk to Tomasz Hollanek, researcher at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. Tomasz argues that design is central to AI ethics and explores the role designers should play in shaping ethical AI systems. The conversation examines the importance of AI literacy, the responsibilities of journalists in reporting on AI technologies, and how design choices embed social and political values into AI. Together, we reflect on how critical design can challenge existing power dynamics and open up more just and inclusive approaches to human-AI interaction.


Studying multiplicity: an interview with Prakhar Ganesh

AIHub

In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. We sat down with Prakhar Ganesh to learn about his work on responsible AI, which is focused on the concept of multiplicity. We found out more about some of the projects he's been involved in, his future plans, and how he got into the field. Could you start with a quick introduction to yourself, where you're studying, and the broad topic of your research? My name is Prakhar Ganesh. I'm also affiliated with Mila, which is a research institute in Montreal. My supervisor is Professor Golnoosh Farnadi.


RWDS Big Questions: how do we balance innovation and regulation in the world of AI?

AIHub

AI development is accelerating, while regulation moves more deliberately. That tension creates a core challenge: how do we maintain momentum without breaking the things that matter? The aim isn't to slow innovation unnecessarily, but to ensure progress happens at a pace that protects individuals and society. Responsible actors should not be disadvantaged, yet safeguards are essential to maintain trust. For the latest video in our RWDS Big Questions series, our panel explores this delicate balance.